517 research outputs found

    Apparent Opacity Affects Perception of Structure from Motion

    The judgment of surface attributes such as transparency or opacity is often considered to be a higher-level visual process that makes use of low-level stereo or motion information to tease apart the transparent from the opaque parts. In this study, we describe a new illusion and results that question this view by showing that depth from transparency and opacity can override the rigidity bias in perceiving depth from motion. This supports the idea that the brain's computation of the surface material attribute of transparency may have to be done either before, or in parallel with, the computation of structure from motion.

    Evidence in Human Subjects for Independent Coding of Azimuth and Elevation for Direction of Heading from Optic Flow

    We studied the accuracy of human subjects in perceiving the direction of self-motion from optic flow, over a range of directions contained in a 45 deg cone whose vertex was at the viewpoint. Translational optic flow fields were generated by displaying brief sequences (<1.0 sec) of randomly positioned dots expanding in a radial fashion. Subjects were asked to indicate the direction of perceived self-motion at the end of the display. The data were analyzed by factoring out the constant component of the error by means of a linear regression analysis performed on the azimuthal and elevational components of the settings. The analysis of the variable error revealed that: (a) the variance of the settings is 3–45% greater along elevation than azimuth for five observers; (b) azimuth and elevation correspond, on average, to the principal components of the error in the settings; and (c) the variances of azimuthal and elevational errors differ between the upper and lower visual fields. Moreover, the distribution of the errors for azimuth and elevation is not the same in the upper and lower hemifields. All of the above evidence supports the hypothesis that heading information is represented centrally in terms of its azimuthal and elevational components.
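
    To make that analysis step concrete, below is a minimal NumPy sketch (not the authors' code; the data, bias, and noise values are invented for illustration) of factoring out the constant error component by linear regression and then examining the variable error along azimuth and elevation:

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical headings inside a 45 deg cone, as (azimuth, elevation)
        # in degrees; the "settings" simulate an observer with a constant bias
        # plus anisotropic variable error (larger along elevation, as reported).
        true_az = rng.uniform(-22.5, 22.5, 200)
        true_el = rng.uniform(-22.5, 22.5, 200)
        set_az = 0.9 * true_az + 1.0 + rng.normal(0.0, 2.0, 200)
        set_el = 0.9 * true_el - 0.5 + rng.normal(0.0, 2.5, 200)

        def variable_error(true, setting):
            # Regress settings on true directions and keep the residuals,
            # factoring out the constant (systematic) error component.
            slope, intercept = np.polyfit(true, setting, 1)
            return setting - (slope * true + intercept)

        res_az = variable_error(true_az, set_az)
        res_el = variable_error(true_el, set_el)
        print("elevation/azimuth variance ratio:", res_el.var() / res_az.var())

        # If azimuth and elevation are the principal components of the error,
        # the covariance matrix of the residuals is close to diagonal.
        cov = np.cov(np.vstack([res_az, res_el]))
        print("principal axes (columns):\n", np.linalg.eigh(cov)[1])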

    Consequences of polar form coherence for fMRI responses in human visual cortex

    Relevant features in the visual image are often spatially extensive and have complex orientation structure. Our perceptual sensitivity to such spatial form is demonstrated by polar Glass patterns, in which an array of randomly-positioned dot pairs that are each aligned with a particular polar displacement (rotation, for example) yields a salient impression of spatial structure. Such patterns are typically considered to be processed in two main stages: local spatial filtering in low-level visual cortex followed by spatial pooling and complex form selectivity in mid-level visual cortex. However, it remains unclear both whether reciprocal interactions within the cortical hierarchy are involved in polar Glass pattern processing and which mid-level areas identify and communicate polar Glass pattern structure. Here, we used functional magnetic resonance imaging (fMRI) at 7T to infer the magnitude of neural response within human low-level and mid-level visual cortex to polar Glass patterns of varying coherence (proportion of signal elements). The activity within low-level visual areas V1 and V2 was not significantly modulated by polar Glass pattern coherence, while the low-level area V3, dorsal and ventral mid-level areas, and the human MT complex each showed a positive linear coherence response function. The cortical processing of polar Glass patterns thus appears to involve primarily feedforward communication of local signals from V1 and V2, with initial polar form selectivity reached in V3 and distributed to multiple pathways in mid-level visual cortex.
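
    As an illustration of the stimulus manipulation, here is a minimal NumPy sketch of one common way to construct a rotational Glass pattern at a given coherence; the function name, field size, and dipole separation are illustrative choices, not taken from the paper:

        import numpy as np

        def polar_glass_pattern(n_pairs=200, coherence=0.5, step=0.02, seed=None):
            # Anchor dots scattered uniformly over a square field.
            rng = np.random.default_rng(seed)
            anchors = rng.uniform(-1.0, 1.0, size=(n_pairs, 2))
            # Tangential direction at each anchor: the local orientation of a
            # rotation about the center of the field.
            tangents = np.column_stack([-anchors[:, 1], anchors[:, 0]])
            tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
            # Noise dipoles take a random orientation instead.
            theta = rng.uniform(0.0, 2.0 * np.pi, n_pairs)
            noise = np.column_stack([np.cos(theta), np.sin(theta)])
            # Coherence is the proportion of signal (rotationally aligned) dipoles.
            signal = rng.uniform(0.0, 1.0, n_pairs) < coherence
            partners = anchors + step * np.where(signal[:, None], tangents, noise)
            return anchors, partners

        anchors, partners = polar_glass_pattern(coherence=0.75, seed=1)

    Sweeping the coherence parameter then yields the per-condition stimuli against which a linear coherence response function can be fit.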

    Why the visual recognition system might encode the effects of illumination

    A key problem in recognition is that the image of an object depends on the lighting conditions. We investigated whether recognition is sensitive to illumination using 3-D objects that were lit from either the left or the right, varying both the shading and the cast shadows. In experiments 1 and 2, participants judged whether two sequentially presented objects were the same, regardless of illumination. Experiment 1 used six objects that were easily discriminated and that were rendered with cast shadows. While no cost was found in sensitivity, there was a response-time cost over a change in lighting direction. Experiment 2 added six objects similar to the original six, making recognition more difficult. The objects were rendered with cast shadows, with no shadows, and, as a control, with white shadows. With normal shadows, a change in lighting direction produced costs in both sensitivity and response times. With white shadows there was a much larger cost in sensitivity and a comparable cost in response times. Without cast shadows there was no cost in either measure, but overall performance was poorer. Experiment 3 used a naming task in which names were assigned to six objects rendered with cast shadows. Participants practised identifying the objects in two viewpoints lit from a single lighting direction. Viewpoint and illumination invariance were then tested over new viewpoints and illuminations. Costs in both sensitivity and response time were found for naming the familiar objects in unfamiliar lighting directions, regardless of whether the viewpoint was familiar or unfamiliar. Together these results suggest that illumination effects such as shadow edges: (1) affect visual memory; and (2) serve to disambiguate three-dimensional shape.
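
    Sensitivity in such same/different tasks is conventionally measured as d′ from signal detection theory; the sketch below (with invented trial counts, not the paper's data) shows how a sensitivity cost across lighting directions would be computed:

        from scipy.stats import norm

        def d_prime(hits, misses, false_alarms, correct_rejections):
            # Log-linear correction keeps d' finite when a rate reaches 0 or 1.
            hit_rate = (hits + 0.5) / (hits + misses + 1.0)
            fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
            return norm.ppf(hit_rate) - norm.ppf(fa_rate)

        # Invented counts for illustration: a sensitivity cost shows up as a
        # lower d' on trials where the lighting direction changed.
        print("same lighting:   ", d_prime(45, 5, 8, 42))
        print("changed lighting:", d_prime(38, 12, 15, 35))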

    Bootstrapped learning of novel objects

    Recognition of familiar objects in cluttered backgrounds is a challenging computational problem. Camouflage provides a particularly striking case, where an object is difficult to detect, recognize, and segment even when in "plain view." Current computational approaches combine low-level features with high-level models to recognize objects. But what if the object is unfamiliar? A novel camouflaged object poses a paradox: a visual system would seem to require a model of an object's shape in order to detect, recognize, and segment it when camouflaged. But how is the visual system to build such a model of the object without easily segmentable samples? One possibility is that learning to identify and segment is opportunistic, in the sense that learning of novel objects takes place only when distinctive clues permit object segmentation from the background, such as when target color or motion enables segmentation on single presentations. We tested this idea and discovered that, on the contrary, human observers can learn to identify and segment a novel target shape even when, in any given training image, the target object is camouflaged. Further, perfect recognition can be achieved without accurate segmentation. We call this ability to build a shape model from high-ambiguity presentations bootstrapped learning.

    Learning to See Random-Dot Stereograms

    • …